
    Type-Directed Weaving of Aspects for Higher-order Functional Languages

    Aspect-oriented programming (AOP) has been shown to be a useful model for software development. Special care must be taken when adapting AOP to strongly typed functional languages, which come with features such as type inference, polymorphic types, higher-order functions and type-scoped pointcuts. Our main contribution lies in a seamless integration of these two paradigms through a static weaving process that handles around advice with type-scoped pointcuts in the presence of higher-order functions. We give a source-level type inference system for a higher-order, polymorphic language coupled with type-scoped pointcuts. The type system ensures that base programs are oblivious to the types of around advice. We present a type-directed translation scheme that resolves all advice applications statically. The translation removes advice declarations from source programs and produces translated code that is typable in the Hindley-Milner system.
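    To make the idea concrete, here is a minimal illustrative sketch in Python rather than the typed functional calculus of the paper, with the pointcut matched dynamically instead of by the paper's static weaving; the names around_typescoped, double_advice and identity are invented for the illustration. An around advice is woven into a function only when the argument falls within the pointcut's type scope.

        def around_typescoped(scope_type, advice):
            """Weave 'around' advice into a function, restricted to a type scope."""
            def apply(fn):
                def woven(arg):
                    if isinstance(arg, scope_type):   # type-scoped pointcut matches
                        return advice(lambda: fn(arg), arg)
                    return fn(arg)                    # out of scope: advice skipped
                return woven
            return apply

        def double_advice(proceed, arg):
            # 'around' advice: runs instead of the call and may invoke proceed()
            return 2 * proceed()

        @around_typescoped(int, double_advice)
        def identity(x):
            return x

        print(identity(21))    # int argument, advice applies  -> 42
        print(identity("hi"))  # str argument, advice skipped  -> 'hi'

    In the paper's setting this dispatch is resolved at compile time by the type-directed translation, so no such runtime type test remains in the translated code.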

    On the Pursuit of Static and Coherent Weaving

    Aspect-oriented programming (AOP) has been shown to be a useful model for software development. Special care must be taken when adapting AOP to strongly typed functional languages, which come with features such as type inference, polymorphic types, higher-order functions and type-scoped pointcuts. Specifically, it is highly desirable that weaving of aspect-oriented functional programs can be performed statically and coherently. In [13], we presented a type-directed weaver that resolves all advice chainings coherently and statically. The novelty of this paper lies in an extended framework that supports static and coherent weaving in the presence of polymorphic recursive functions, the advising of advice bodies, and higher-order advice.
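    Continuing the Python analogy from the sketch above (again dynamic and illustrative, not the paper's static framework; around, timing_advice, logging_advice and square are invented names), the following shows one advice body being advised by another advice before the result is woven into a base function, the kind of chaining the extended framework resolves statically and coherently.

        import time

        def around(advice):
            """Weave 'around' advice into a function via a proceed callback."""
            def apply(fn):
                def woven(*args):
                    return advice(lambda: fn(*args), *args)
                return woven
            return apply

        def timing_advice(proceed, *args):
            start = time.perf_counter()
            result = proceed()
            print(f"advised body took {time.perf_counter() - start:.6f}s")
            return result

        # The logging advice body is itself advised by the timing advice.
        @around(timing_advice)
        def logging_advice(proceed, *args):
            print(f"entering with {args}")
            return proceed()

        # Higher-order use: the already-advised advice is then woven into a base function.
        @around(logging_advice)
        def square(x):
            return x * x

        print(square(6))   # prints the log line, then the timing line, then 36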

    Expediting Neural Network Verification via Network Reduction

    A wide range of verification methods have been proposed to verify the safety properties of deep neural networks, ensuring that the networks function correctly in critical applications. However, many well-known verification tools still struggle with complicated network architectures and large network sizes. In this work, we propose a network reduction technique as a pre-processing step prior to verification. The proposed method reduces neural networks by eliminating stable ReLU neurons and transforming the network into a sequential neural network consisting of ReLU and affine layers, which most verification tools can handle. We instantiate the reduction technique on state-of-the-art complete and incomplete verification tools, including alpha-beta-crown, VeriNet and PRIMA. Our experiments on a large set of benchmarks indicate that the proposed technique can significantly reduce neural networks and speed up existing verification tools. Furthermore, the experimental results also show that network reduction can improve the applicability of existing verification tools on many networks by reducing them to sequential neural networks.
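    As a rough sketch of the stable-neuron idea, here is a minimal numpy illustration assuming simple interval bounds over a box input domain; it is not the implementation used with alpha-beta-crown, VeriNet or PRIMA, and preactivation_bounds and drop_inactive are invented names. A ReLU neuron whose pre-activation upper bound is non-positive always outputs zero and can be removed outright, while one whose lower bound is non-negative acts as the identity and can be folded into the surrounding affine layers.

        import numpy as np

        def preactivation_bounds(W, b, lo, hi):
            """Interval bounds of W @ x + b when x lies in the box [lo, hi]."""
            W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
            lb = W_pos @ lo + W_neg @ hi + b
            ub = W_pos @ hi + W_neg @ lo + b
            return lb, ub

        def drop_inactive(W1, b1, W2, lo, hi):
            """Remove stably inactive neurons from x -> W2 @ relu(W1 @ x + b1)."""
            _, ub = preactivation_bounds(W1, b1, lo, hi)
            keep = ub > 0                     # neurons with ub <= 0 never fire
            return W1[keep], b1[keep], W2[:, keep]

        # Tiny usage example on random weights with inputs in [-1, 1]^3.
        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
        W2 = rng.normal(size=(2, 8))
        W1r, b1r, W2r = drop_inactive(W1, b1, W2, -np.ones(3), np.ones(3))
        print(W1.shape, "->", W1r.shape)      # fewer hidden rows if any neuron is stably inactive

    Stably active neurons can be handled analogously by bypassing their ReLU and composing the adjacent affine maps, which is what allows the reduced network to be expressed purely in ReLU and affine layers.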

    SMArTIC: Towards Building an Accurate, Robust and Scalable Specification Miner

    10.1145/1181775.1181808. Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering, 265-27

    Towards Better Quality Specification Miners


    Mining Patterns and Rules for Software Specification Discovery

    Proceedings of the VLDB Endowment, 1(2), 1609-161

    Mining Modal Scenarios from Execution Traces

    10.1145/1297846.1297883. Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA), 777-77

    Non-Redundant Sequential Rules - Theory and Algorithm

    A sequential rule expresses a relationship between two series of events happening one after another. Sequential rules are potentially useful for analyzing data in sequential format, ranging from purchase histories and network logs to program execution traces. In this work, we investigate and propose a syntactic characterization of a non-redundant set of sequential rules, built upon past work on compact sets of representative patterns. A rule is redundant if it can be inferred from another rule having the same support and confidence. When the set of mined rules is used as a composite filter, replacing the full set of rules with a non-redundant subset does not impact the accuracy of the filter. We consider several rule sets based on compositions of various types of pattern sets: generators, projected-database generators, closed patterns and projected-database closed patterns. We investigate the completeness and tightness of these rule sets, and we characterize a tight and complete set of non-redundant rules.
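    A minimal sketch of the redundancy idea, in illustrative Python rather than the paper's algorithm or its exact syntactic characterization; is_subseq, support, mine_rules and non_redundant are invented names, and the redundancy test used here (another rule with the same support and confidence, a premise that is a subsequence of this rule's premise, and an overall sequence that subsumes this rule's) is a simplified reading of "inferable from another rule".

        from itertools import combinations

        def is_subseq(small, big):
            """True if 'small' occurs in 'big' as a (possibly non-contiguous) subsequence."""
            it = iter(big)
            return all(any(x == y for y in it) for x in small)

        def support(seq, db):
            return sum(is_subseq(seq, s) for s in db) / len(db)

        def mine_rules(db, min_sup=0.5, min_conf=0.6, max_len=3):
            """Enumerate short candidates (simplified: items in sorted order) and
            split each frequent sequence into premise => conclusion."""
            items = sorted({x for s in db for x in s})
            rules = []
            for length in range(2, max_len + 1):
                for seq in combinations(items, length):
                    sup = support(seq, db)
                    if sup < min_sup:
                        continue
                    for cut in range(1, length):
                        pre, post = seq[:cut], seq[cut:]
                        conf = sup / support(pre, db)
                        if conf >= min_conf:
                            rules.append((pre, post, sup, conf))
            return rules

        def non_redundant(rules):
            def redundant(r, o):
                (pre, post, sup, conf), (pre2, post2, sup2, conf2) = r, o
                return ((pre, post) != (pre2, post2) and sup == sup2 and conf == conf2
                        and is_subseq(pre2, pre) and is_subseq(pre + post, pre2 + post2))
            return [r for r in rules if not any(redundant(r, o) for o in rules)]

        db = [("a", "b", "c"), ("a", "b", "c", "d"), ("a", "c", "d")]
        for pre, post, sup, conf in non_redundant(mine_rules(db)):
            print(pre, "=>", post, f"sup={sup:.2f} conf={conf:.2f}")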